Trees and Nets

Kerry Back

BUSI 520, Fall 2022
JGSB, Rice University

Neural networks

  • A multi-layer perceptron (MLP) consists of “neurons” arranged in layers.
  • A neuron is really a mathematical function. It takes inputs (real numbers) \(x_1, \ldots, x_n\), calculates a function \(y=f(x_1, \ldots, x_n)\), and passes \(y\) to the neurons in the next layer.
  • The inputs in the first layer are the independent variables in a row of the data.
  • The inputs in successive layers are the calculations from the prior layer. The number of inputs can differ across layers (it equals the number of neurons in the prior layer).
  • The last layer is a single neuron that produces the output.
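As a sketch of the idea (not code from the slides), a single neuron is just a weighted sum of its inputs plus a bias, passed through an activation function. The weights, bias, and the ReLU activation below are illustrative assumptions:

```python
def neuron(xs, ws, b):
    """One neuron: a weighted sum of inputs plus a bias, then an
    activation. The ReLU activation here is an illustrative choice."""
    z = sum(w * x for w, x in zip(ws, xs)) + b  # linear combination
    return max(0.0, z)  # ReLU: pass positive values, zero out negatives

# A neuron with 3 inputs
y = neuron([1.0, -2.0, 0.5], ws=[0.4, 0.1, -0.3], b=0.05)
print(y)
```

The output \(y\) would then be one of the inputs to each neuron in the next layer.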

Illustration

There are 4 inputs; 5 different functions of the inputs are calculated in the “hidden layer.”

The output is a function of the 5 numbers calculated in the hidden layer.
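A minimal NumPy sketch of this 4–5–1 architecture, with arbitrary illustrative weights (the shapes, not the values, are the point):

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative random weights

# Hidden layer: 5 neurons, each a function of the 4 inputs
W1 = rng.normal(size=(5, 4))  # one row of weights per hidden neuron
b1 = np.zeros(5)

# Output layer: a single neuron taking the 5 hidden values as inputs
W2 = rng.normal(size=(1, 5))
b2 = np.zeros(1)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # the 5 hidden-layer calculations
    return (W2 @ h + b2)[0]           # the single output

x = np.array([1.0, 2.0, 3.0, 4.0])  # one row of data: 4 independent variables
print(forward(x))
```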

Overview

  • We’re using Quarto as a front-end to reveal.js to create HTML slides.

  • Our Quarto qmd file contains markdown, LaTeX, and Python code.

  • We’re using the solarized theme.

  • reveal.js slides can also be created in a point-and-click manner (including LaTeX equations and more) at slides.com.
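For reference, a sketch of what the YAML header of such a qmd file might look like; the exact options used for this deck are an assumption, but `format: revealjs` and `theme: solarized` are standard Quarto settings:

```yaml
---
title: "Trees and Nets"
format:
  revealjs:
    theme: solarized
---
```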

A numbered list

  1. Point 1
  2. Point 2
  3. Point 3 involves some math \(y=\sqrt{x}\).
  4. Point 4 involves more math.

\[y = \frac{\log x}{\sqrt{x}}\]

Another slide with text and math

Here is the first part.

Here is an equation.

\[ f(x) = \int_0^x t^{2}\, \mathrm{d} t \]

Here is another equation.

\[ g(x) = \int_0^x t^{3}\, \mathrm{d} t \]
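These integrals have simple closed forms, \(f(x)=x^3/3\) and \(g(x)=x^4/4\). A quick numeric check in pure Python using the midpoint rule:

```python
def midpoint_integral(func, a, b, n=100_000):
    """Approximate the integral of func over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

x = 2.0
f = midpoint_integral(lambda t: t**2, 0.0, x)  # should match x**3 / 3
g = midpoint_integral(lambda t: t**3, 0.0, x)  # should match x**4 / 4
print(f, x**3 / 3)
print(g, x**4 / 4)
```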

Playing around some

\[ f(x) = \int_0^x t^{2}\, \mathrm{d} t \]

\[ g(x) = \int_0^x t^{3}\, \mathrm{d} t \]

\[ h(x) = \int_0^x t^{4}\, \mathrm{d} t \]

Generate a figure

This is produced within the qmd file.

View a website

Here is a website.